

Machine Learning Model Interpretability and Explainability


ML/AI models are becoming more complex and harder to interpret and explain. A simple, easy-to-explain regression or decision tree model can no longer fully satisfy technical and business needs, so practitioners increasingly turn to ensemble methods and deep neural networks for better predictive accuracy. However, these more complex models are difficult to explain, debug, and understand, which is why they are often called black-box models.
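The contrast above can be made concrete. A minimal sketch, assuming scikit-learn and its built-in diabetes dataset (both choices are illustrative, not from the original text): a shallow decision tree whose rules can be printed and read directly, next to a random forest whose behavior has to be probed indirectly with a model-agnostic tool such as permutation importance.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

# Illustrative dataset choice; any tabular regression data would do.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a depth-2 tree whose full rule set fits on screen.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Black-box model: a forest of 100 trees has no single readable rule set,
# so we fall back on permutation importance to see which features matter.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
result = permutation_importance(
    forest, X_test, y_test, n_repeats=10, random_state=0
)
for name, score in sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
):
    print(f"{name}: {score:.3f}")
```

Permutation importance stands in here for the broader family of post-hoc explanation tools (SHAP, LIME, partial dependence) that the interpretability literature applies to models like this forest.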